Discriminative training of HMM using maximum normalized likelihood algorithm

Authors

  • Konstantin Markov
  • Seiichi Nakagawa
  • Satoshi Nakamura
Abstract

In this paper, we present the Maximum Normalized Likelihood Estimation (MNLE) algorithm and its application to discriminative training of HMMs for continuous speech recognition. The objective of this algorithm is to maximize the normalized frame likelihood of the training data. Instead of the gradient descent techniques usually applied for objective function optimization in other discriminative algorithms, such as Minimum Classification Error (MCE) and Maximum Mutual Information (MMI), we used a modified Expectation-Maximization (EM) algorithm, which greatly simplifies and speeds up the training procedure. Evaluation experiments showed better recognition rates compared to both the Maximum Likelihood (ML) training method and the MCE/GPD discriminative method. In addition, the MNLE algorithm showed better generalization abilities and was faster than MCE/GPD.
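The quantity the abstract describes, the frame-level normalized likelihood, can be illustrated with a minimal sketch. The setup below is purely illustrative (single-Gaussian "models" standing in for HMM state densities; all names and values are assumptions, not from the paper): each frame's likelihood under the correct model is normalized by the summed likelihoods over all competing models, and MNLE maximizes the average of this quantity over the training data.

```python
import numpy as np

def gaussian_loglik(x, mean, var):
    # Per-frame log-likelihood under a 1-D Gaussian (stand-in for a state density).
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def normalized_frame_likelihood(frames, means, variances, correct):
    # Likelihood of every model at every frame: shape (n_models, n_frames).
    logliks = np.array([gaussian_loglik(frames, m, v)
                        for m, v in zip(means, variances)])
    liks = np.exp(logliks)
    # Normalized frame likelihood: p_correct(t) / sum_k p_k(t).
    # MNLE maximizes the average of this over the training frames.
    return liks[correct] / liks.sum(axis=0)

# Toy data: frames drawn near the correct model's mean.
rng = np.random.default_rng(0)
frames = rng.normal(0.0, 1.0, size=50)
means, variances = [0.0, 3.0], [1.0, 1.0]

nl = normalized_frame_likelihood(frames, means, variances, correct=0)
print(nl.mean())  # average normalized frame likelihood, in (0, 1)
```

Because the normalization involves competing models, raising this objective pushes the correct model's density up where rivals score well, which is the discriminative effect the paper exploits; the modified EM procedure it proposes for this optimization is not reproduced here.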


Similar articles

Discriminative training of GMM using a modified EM algorithm for speaker recognition

In this paper, we present a new discriminative training method for Gaussian Mixture Models (GMM) and its application to text-independent speaker recognition. The objective of this method is to maximize the frame-level normalized likelihoods of the training data. That is why we call it the Maximum Normalized Likelihood Estimation (MNLE). In contrast to other discriminative algorithms, the o...


Training Discriminative HMM by Optimal Allocation of Gaussian Kernels

We propose to train Hidden Markov Model (HMM) by allocating Gaussian kernels non-uniformly across states so as to optimize a selected discriminative training criterion. The optimal kernel allocation problem is first formulated based upon a non-discriminative, Maximum Likelihood (ML) criterion and then generalized to incorporate discriminative ones. An effective kernel exchange algorithm is deri...


Error-weighted discriminative training for HMM parameter estimation

Optimizing discriminative objectives in HMM parameter training proved to outperform Maximum Likelihood-based parameter estimation in numerous studies. This paper extends the Maximum Mutual Information objective by applying utterance specific weighting factors that are adjusted for minimum sentence error. In addition to that, the paper investigates tuning separate numerator and denominator weigh...


The trended HMM with discriminative training for phonetic classification

In this paper, we extend the Maximum Likelihood (ML) training algorithm to the Minimum Classification Error (MCE) training algorithm for optimal estimation of the state-dependent polynomial coefficients in the trended HMM [2]. The problem of automatic speech recognition is viewed as a discriminative dynamic data-fitting problem, where relative (not absolute) closeness in fitting an array of dyna...


Speech Trajectory Discrimination Using the Minimum Classification Error

In this paper, we extend the Maximum Likelihood (ML) training algorithm to the Minimum Classification Error (MCE) training algorithm for discriminatively estimating the state-dependent polynomial coefficients in the stochastic trajectory model or the trended HMM originally proposed in [2]. The main motivation of this extension is the new model space for smoothness-constrained, state-bound speech t...



Publication year: 2001